@karpathy One question that came to mind while reading the experiment loop: the system is elegantly optimized for incremental hill-climbing. Agents propose a change, it improves val_bpb, gets pushed, and becomes part of the lineage. Clean and effective. Letting agents maintain side branches that are temporarily worse, but flagged as speculative, might help counteract this. Otherwise I wonder whether the system converges strongly to local optima over time: not because the agents are bad, but because the incentive structure only rewards the next incremental step.
The repo was deleted; forks point to https://github.com/ygivenx/agenthub
Multi-objective model optimization: maximize accuracy, minimize parameters, minimize iteration time.
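With several objectives there is no single "best" model, only a Pareto front of non-dominated trade-offs. A minimal sketch of that comparison (the `Candidate` fields and names here are illustrative, not from any actual AgentHub code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    accuracy: float      # maximize
    n_params: int        # minimize
    iter_time_s: float   # minimize

def dominates(a: Candidate, b: Candidate) -> bool:
    """Pareto dominance: a is at least as good as b on every objective
    and strictly better on at least one."""
    ge = (a.accuracy >= b.accuracy and a.n_params <= b.n_params
          and a.iter_time_s <= b.iter_time_s)
    gt = (a.accuracy > b.accuracy or a.n_params < b.n_params
          or a.iter_time_s < b.iter_time_s)
    return ge and gt

def pareto_front(cands: list[Candidate]) -> list[Candidate]:
    """Keep every candidate that no other candidate dominates."""
    return [c for c in cands if not any(dominates(o, c) for o in cands)]

a = Candidate(accuracy=0.91, n_params=10_000_000, iter_time_s=1.2)
b = Candidate(accuracy=0.90, n_params=5_000_000, iter_time_s=0.8)   # smaller, faster
c = Candidate(accuracy=0.89, n_params=12_000_000, iter_time_s=1.5)  # dominated by both
print(pareto_front([a, b, c]))  # a and b survive; c is dominated
```

A leaderboard built on this would reward an agent for improving any objective without regressing the others, instead of collapsing everything into a single scalar like val_bpb.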
Autoresearch and AgentHub made me think a lot about evolution and a gene pool; the two seem to share characteristics. In a gene pool there is no single 'main' branch: lots of tracks run at once in different directions, each trying to find some new, better path. Similar to the vision of AgentHub, there is sharing and swapping of genes between these tracks (analogous to the sharing and swapping of commits between branches). There is also no notion of a 'merge back into main': each track proceeds independently; some will fail and end, others will continue on and become the de facto 'best track' for a time. I wonder if there is some design that builds on this proven strategy? Or maybe this is just an indication that the current design is a good one.
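The gene-pool analogy maps directly onto a classic evolutionary-search loop: a population of independent tracks, crossover standing in for swapping commits between branches, and no merge back into main, just differential survival. A toy sketch of that loop (toy bit-string fitness, not AgentHub's actual design):

```python
import random

rng = random.Random(42)

def fitness(genome: list[int]) -> int:
    # Toy stand-in for a real score like -val_bpb: more 1-bits is better.
    return sum(genome)

def crossover(a: list[int], b: list[int]) -> list[int]:
    # Analogue of swapping commits between branches: one-point crossover.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(g: list[int], rate: float = 0.05) -> list[int]:
    # Each agent's independent tinkering: rare random bit flips.
    return [bit ^ (rng.random() < rate) for bit in g]

# No single 'main': a whole population of tracks evolves at once.
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):
    # Weak tracks fail and end; strong ones parent the next generation.
    parents = sorted(pop, key=fitness, reverse=True)[:10]
    pop = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
           for _ in range(30)]

best = max(pop, key=fitness)
print(fitness(best))  # the de facto 'best track' at this generation
```

Nothing here ever merges into a main branch; the "best" lineage is just whichever track currently scores highest, which matches the comment's point that main is an emergent label rather than a structural one.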
Call for help/discussion on autoresearch integration into AgentHub. I have an early version deployed at autoresearchhub.com. The new program.md I am using for my first agent is below. After some iteration I might push to master of autoresearch; I just want to iterate on it first a bit more and think it through.